Robot Talk Episode 145 – Robotics and automation in manufacturing, with Agata Suwala

Robohub

Claire chatted to Agata Suwala from the Manufacturing Technology Centre about leveraging robotics to make manufacturing systems more sustainable. Agata Suwala is a Technology Manager at the Manufacturing Technology Centre, where she leads cutting-edge work in automation and robotics. With over a decade of experience in R&D, Agata specialises in developing and implementing advanced manufacturing systems--particularly for the aerospace sector--transforming complex, skill-intensive processes through automation. Her recent focus is on enabling the transition to a circular economy by leveraging automation and robotics to create sustainable, scalable technologies. Robot Talk is a weekly podcast that explores the exciting world of robotics, artificial intelligence and autonomous machines.


Reversible, detachable robotic hand redefines dexterity

Robohub

With their opposable thumbs, multiple joints and gripping skin, human hands are often considered the pinnacle of dexterity, and many robotic hands are designed in their image. But having been shaped by the slow process of evolution, human hands are far from optimized; the biggest drawbacks include our single, asymmetrical thumbs and attachment to arms with limited mobility. "We can easily see the limitations of the human hand when attempting to reach objects underneath furniture or behind shelves, or performing simultaneous tasks like holding a bottle while picking up a chip can," says Aude Billard, head of the Learning Algorithms and Systems Laboratory (LASA) in EPFL's School of Engineering. "Likewise, accessing objects positioned behind the hand while keeping the grip stable can be extremely challenging, requiring awkward wrist contortions or body repositioning." A team composed of Billard, LASA researcher Xiao Gao, and Kai Junge and Josie Hughes from the Computational Robot Design and Fabrication Lab designed a robotic hand that overcomes these challenges.


Robot, make me a chair

Robohub

Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail that they don't lend themselves to brainstorming or rapid prototyping. In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words. Their system uses a generative AI model to build a 3D representation of an object's geometry based on the user's prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object's function and geometry.


Robot Talk Episode 144 – Robot trust in humans, with Samuele Vinanzi

Robohub

Claire chatted to Samuele Vinanzi from Sheffield Hallam University about how robots can tell whether to trust or distrust people. Samuele Vinanzi is a Senior Lecturer in Robotics and Artificial Intelligence at Sheffield Hallam University. He specializes in Cognitive Robotics: an interdisciplinary field that integrates robotics, artificial intelligence, cognitive science, and psychology to create robots that perceive, reason, and interact like humans. His research focuses on enabling social collaboration between humans and robots, particularly emotional intelligence, intention reading, and artificial trust. His recent book, "In Robots We Trust", explores trust relationships between humans and robots.


Robot Talk Episode 143 – Robots for children, with Elmira Yadollahi

Robohub

Claire chatted to Elmira Yadollahi from Lancaster University about how children interact with and relate to robots. Elmira Yadollahi is an Assistant Professor of Computer Science at Lancaster University. She has a joint PhD in robotics and computer science from EPFL in Switzerland and Instituto Superior Técnico in Portugal. Her research tackles explainability in robotics, as well as multimodal perception and explanation methods. Her core expertise is in child-robot interaction, with a focus on expectation management, trust, and AI literacy.

New frontiers in robotics at CES 2026

Robohub

CES 2026 showed that humanoid and embodied AI systems still have a long way to go before delivering real-world value, particularly in homes. At the same time, there is a growing sense that the path to deployment is becoming clearer. A consensus has emerged across platforms: multi-camera perception, often wrist-mounted, paired with vision-language-action (VLA) models, is sufficient for most tasks. Increasingly, tactile hands and vision-tactile-language-action (VTLA) software are added. There was a clear split between industrial and home-care humanoids.


Robot Talk Episode 142 – Collaborative robot arms, with Mark Gray

Robohub

Mark Gray has worked in automation for the last 30 years, first in machine vision and robotics, and most recently in collaborative robots, or cobots. As country manager, Mark was the first person to work for Universal Robots in the UK and has carried out projects with many research institutes, such as the Advanced Manufacturing Research Centre (AMRC), the Manufacturing Technology Centre (MTC), the National Robotarium, and Bristol Robotics Lab. Robot Talk is a weekly podcast that explores the exciting world of robotics, artificial intelligence and autonomous machines.


Robot Talk Episode 141 – Our relationship with robot swarms, with Razanne Abu-Aisheh

Robohub

Claire chatted to Razanne Abu-Aisheh from the University of Bristol about how people feel about interacting with robot swarms. Razanne Abu-Aisheh is a Senior Research Associate in the Centre for Sociodigital Futures at the University of Bristol. Her work explores how people interact with robot swarms, with a focus on how collective robot behaviours influence human perception. In her current research, she collaborates with communities to imagine more inclusive and meaningful futures with robotics, working towards community-centred design. Her broader interests include bringing robot swarms into real-world settings and designing them with people in mind.


Robot Talk Episode 140 – Robot balance and agility, with Amir Patel

Robohub

Amir Patel is an Associate Professor of Robotics & AI in the Department of Computer Science at University College London (UCL). His research uses robotics methods--sensor fusion, computer vision, mechanical modelling, and optimal control--to understand and quantify animal locomotion, especially high-speed predators such as the cheetah, and to translate these insights into bio-inspired machines. Previously, he served on the faculty of Electrical Engineering at the University of Cape Town, where he founded and directed the African Robotics Unit (ARU). Robot Talk is a weekly podcast that explores the exciting world of robotics, artificial intelligence and autonomous machines.


Robot Talk Episode 139 – Advanced robot hearing, with Christine Evers

Robohub

Claire chatted to Christine Evers from the University of Southampton about helping robots understand the world around them through sound. Christine Evers is an Associate Professor in Computer Science and Director of the Centre for Robotics at the University of Southampton. Her research pushes the boundaries of machine listening, enabling robots to make sense of life in sound. Her current focus is embedding our understanding of the human auditory process into deep-learning audio architectures. This bio-inspired approach moves away from massive, internet-scale models toward compute-efficient and inherently interpretable systems--opening the door to a new generation of embodied auditory intelligence.